Title

Data Cloud Engineer

Description

We are looking for a Data Cloud Engineer to join our growing team of data and analytics experts. The Data Cloud Engineer will expand and optimize our data and data pipeline architecture, improve data flow and collection for cross-functional teams, and support our software developers, database architects, data analysts, and data scientists on data initiatives, ensuring that an optimal data delivery architecture is applied consistently across ongoing projects.

The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. They must be self-directed, comfortable supporting the data needs of multiple teams, systems, and products, and excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

Key responsibilities include building and maintaining scalable data pipelines, working with stakeholders on data-related technical issues, and supporting data infrastructure needs. The Data Cloud Engineer will also implement data governance and security best practices, ensure data quality, and collaborate with data scientists and analysts to improve the data models that feed our business intelligence tools.

This role requires strong experience with cloud platforms such as AWS, Azure, or Google Cloud Platform, and proficiency in a programming language such as Python, Java, or Scala. Familiarity with big data tools like Hadoop, Spark, and Kafka, and with data warehousing solutions such as Snowflake, Redshift, or BigQuery, is essential, as is experience with workflow orchestration tools like Airflow or Luigi. If you are passionate about data, cloud technologies, and building robust data systems, we encourage you to apply and become part of our innovative and collaborative team.
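To give applicants a feel for the day-to-day work, here is a minimal sketch of a daily pipeline using the Airflow TaskFlow API (Airflow 2.x assumed). Every identifier, from the DAG name to the sample rows, is an illustrative placeholder rather than one of our actual pipelines:

```python
# Minimal daily ETL DAG sketch using Apache Airflow's TaskFlow API (Airflow 2.x).
# All names (DAG id, columns, sample data) are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_daily_etl():
    @task
    def extract():
        # A real task would pull from an API, object storage, or a source database.
        return [{"order_id": 1, "amount": "19.99"}, {"order_id": 2, "amount": "5.00"}]

    @task
    def transform(rows):
        # Cast amounts to integer cents so downstream aggregation avoids float drift.
        return [{**r, "amount_cents": int(float(r["amount"]) * 100)} for r in rows]

    @task
    def load(rows):
        # A real task would write to Snowflake, Redshift, or BigQuery via a provider hook.
        print(f"Loading {len(rows)} rows into the warehouse")

    # Chaining the calls wires up the task dependencies: extract -> transform -> load.
    load(transform(extract()))


orders_daily_etl()
```

In practice the extract and load steps would use Airflow provider hooks for a cloud warehouse rather than in-memory sample rows, but the extract/transform/load shape of the DAG is the same.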

Responsibilities

  • Design, build, and maintain scalable data pipelines
  • Develop and optimize data architecture and data flow
  • Collaborate with data scientists and analysts to support data needs
  • Implement data governance and security best practices
  • Monitor and troubleshoot data pipeline performance
  • Work with cloud platforms to manage data infrastructure
  • Ensure data quality and consistency across systems
  • Automate data integration and transformation processes (a brief sketch follows this list)
  • Support real-time and batch data processing
  • Document data systems and processes
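The transformation and data-quality bullets above can be made concrete with a short PySpark batch sketch; the S3 paths, column names, and quality rules below are hypothetical examples, not our production logic:

```python
# Minimal PySpark batch job: read raw events, enforce simple quality rules,
# and write a cleaned, partitioned dataset. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clean_events").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical source path

# Quality rules: drop rows missing a user_id; null out negative amounts.
cleaned = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn(
           "amount",
           F.when(F.col("amount") < 0, None).otherwise(F.col("amount")),
       )
)

# Partitioning by date keeps downstream reads cheap for daily consumers.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/clean/events/"  # hypothetical target path
)
```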

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • 3+ years of experience in data engineering or a similar role
  • Proficiency in Python, Java, or Scala
  • Experience with cloud platforms like AWS, Azure, or GCP
  • Knowledge of big data tools such as Spark, Hadoop, or Kafka
  • Familiarity with data warehousing solutions like Snowflake or Redshift
  • Experience with workflow orchestration tools like Airflow or Luigi
  • Strong understanding of data modeling and ETL processes
  • Excellent problem-solving and communication skills
  • Ability to work independently and in a team environment

Potential interview questions

  • What cloud platforms have you worked with in previous roles?
  • Can you describe a data pipeline you built and the technologies used?
  • How do you ensure data quality and consistency?
  • What experience do you have with big data tools like Spark or Kafka?
  • Have you implemented data governance or security measures before?
  • How do you handle performance issues in data pipelines?
  • What is your experience with data warehousing solutions?
  • How do you collaborate with data scientists and analysts?
  • What programming languages are you most comfortable with?
  • Can you describe a challenging data engineering problem you solved?